Review for NeurIPS paper: Generating Correct Answers for Progressive Matrices Intelligence Tests

Neural Information Processing Systems

Weaknesses: My first concern is that this model seems far from minimal. Generating correct answers for RPM is an interesting task, but one of the reasons it is interesting to the current AI community is that humans can generate correct answers without a huge amount of training. Although this work demonstrates the possibility of a generator that shows some reasoning capability, I strongly suspect that this capability is distilled from the subnetworks for context extraction, which are trained with strong supervision. There is still a long distance between this model and the human brain; the latter is believed to have been shaped by nature following minimalism.


Review for NeurIPS paper: Generating Correct Answers for Progressive Matrices Intelligence Tests


I have read the reviews and the author response, and I have also asked an expert AC to provide a comment in lieu of a fourth reviewer (pasted below for reference). Taking all of these together, I will recommend acceptance, with a note. NOTE TO AUTHORS: This work is going to be the reference paper for using generation as opposed to discrimination. As such, it is crucial to set the right path for evaluating models in a fair and rigorous way, so that research that follows builds on a solid base. The presented evaluation has some issues (see points below).


Generating Correct Answers for Progressive Matrices Intelligence Tests


Raven's Progressive Matrices are multiple-choice intelligence tests in which one tries to complete the missing panel in a 3x3 grid of abstract images. Previous attempts to address this test have focused solely on selecting the right answer out of the multiple choices. In this work, we focus, instead, on generating a correct answer given the grid, which is a harder task by definition. The proposed neural model combines multiple advances in generative models, including employing multiple pathways through the same network, using the reparameterization trick along two pathways to make their encodings compatible, a selective application of variational losses, and a complex perceptual loss coupled with a selective backpropagation procedure. Our algorithm is able not only to generate a set of plausible answers but also to be competitive with state-of-the-art methods in multiple-choice tests.
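To make the abstract's mention of "the reparameterization trick along two pathways" concrete, the sketch below illustrates the generic technique, not the paper's actual architecture: two encoding pathways each predict a Gaussian over a shared latent space, both are sampled via the same reparameterization, and each posterior is regularized toward a standard normal. All function and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    The reparameterization trick keeps sampling differentiable:
    gradients can flow through mu and log_var because the
    randomness is isolated in eps.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL divergence from N(mu, diag(exp(log_var))) to N(0, I)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Two hypothetical pathways (e.g., a context encoder and an answer
# encoder) predicting Gaussians over the same 8-dim latent space;
# sampling both through the same trick makes their codes comparable.
mu_ctx, log_var_ctx = np.zeros(8), np.zeros(8)       # pathway 1
mu_ans, log_var_ans = np.ones(8), np.full(8, -1.0)   # pathway 2

z_ctx = reparameterize(mu_ctx, log_var_ctx, rng)
z_ans = reparameterize(mu_ans, log_var_ans, rng)

# One variational (KL) term per pathway; the paper applies such
# losses selectively, which is not modeled here.
kl_total = (kl_to_standard_normal(mu_ctx, log_var_ctx)
            + kl_to_standard_normal(mu_ans, log_var_ans))
```

In a real model, `mu` and `log_var` would be outputs of neural encoders and the KL terms would be added to the reconstruction and perceptual losses during training.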